Incremental class learning using variational autoencoders with similarity learning

Authors

Abstract

Catastrophic forgetting in neural networks during incremental learning remains a challenging problem. Previous research investigated catastrophic forgetting in fully connected networks, with some earlier work exploring activation functions and learning algorithms. Applications of neural networks have since been extended to include similarity learning. Understanding how similarity-learning loss functions would be affected by catastrophic forgetting is of significant interest. Our research investigates catastrophic forgetting for four well-known similarity-based loss functions during incremental class learning: Angular, Contrastive, Center, and Triplet loss. Our results show that the rate of forgetting differs across loss functions on multiple datasets. Angular loss was least affected, followed by Contrastive and Triplet loss, and Center loss with good mining techniques. We implemented three existing incremental-learning techniques: iCaRL, EWC, and EBLL. We further propose a novel technique that uses Variational Autoencoders (VAEs) to generate representations as exemplars, passed through the network's intermediate layers. Our method outperformed the three existing state-of-the-art techniques and does not require stored images (exemplars): the representations generated by the VAEs help preserve the regions of the embedding space used by prior knowledge so that new knowledge does not "overwrite" it.
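The mechanism in the last two sentences can be sketched in code. Below is a minimal PyTorch sketch (an illustration, not the authors' released implementation): a small VAE is fit to intermediate-layer features of old classes, and during the next increment its samples act as generated "representation exemplars" whose embeddings are anchored while a triplet loss trains on the new classes. The layer sizes, the choice of triplet loss, the head/old_head modules, and the weight lam are all assumptions for illustration.

    # Minimal sketch of the abstract's idea; assumptions noted above.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureVAE(nn.Module):
        """VAE over d-dimensional intermediate features, not raw images."""
        def __init__(self, d=128, z=32):
            super().__init__()
            self.z = z
            self.enc = nn.Linear(d, 64)
            self.mu = nn.Linear(64, z)
            self.logvar = nn.Linear(64, z)
            self.dec = nn.Sequential(nn.Linear(z, 64), nn.ReLU(), nn.Linear(64, d))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            zs = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            return self.dec(zs), mu, logvar

        def sample(self, n):
            # Generated "representation exemplars" standing in for old classes.
            with torch.no_grad():
                return self.dec(torch.randn(n, self.z))

    def elbo(recon, x, mu, logvar):
        # Standard VAE objective: reconstruction + KL to the unit Gaussian prior.
        rec = F.mse_loss(recon, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl

    triplet = nn.TripletMarginLoss(margin=0.2)

    def incremental_loss(head, old_head, vae, anchor, pos, neg, lam=1.0):
        # Similarity objective on the new classes...
        new_task = triplet(head(anchor), head(pos), head(neg))
        # ...plus a term that keeps the embeddings of generated old-class
        # features where the frozen previous model (old_head) put them, so
        # new classes do not "overwrite" that region of the space.
        fake_old = vae.sample(anchor.size(0))
        preserve = F.mse_loss(head(fake_old), old_head(fake_old))
        return new_task + lam * preserve

Here the preservation term plays the role that stored image exemplars play in methods like iCaRL, but on generated features, which is why no images need to be kept.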


Related articles

Online Incremental Feature Learning with Denoising Autoencoders

While determining model complexity is an important problem in machine learning, many feature learning algorithms rely on cross-validation to choose an optimal number of features, which is usually challenging for online learning from a massive stream of data. In this paper, we propose an incremental feature learning algorithm to determine the optimal model complexity for large-scale, online data...


Supplementary Material: Online Incremental Feature Learning with Denoising Autoencoders

Roughly speaking, this update rule is based on the following idea: increase the number of feature increments when the performance improves (i.e., the model is not at optimum), and decrease the number of feature increments when there is minimal or no performance improvement (i.e., the model has converged). From this intuition, we consider the following update rule (referred to as “update rule I”):
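A toy rendition of that intuition (a paraphrase; the paper's actual rule, thresholds, and factors differ, and next_increment with its constants is hypothetical) might look like:

    def next_increment(delta_n, prev_loss, curr_loss,
                       grow=2.0, shrink=0.5, tol=1e-3):
        """Adapt how many features to add at the next step ("update rule I",
        paraphrased): grow while the objective still improves, shrink once
        improvement stalls. All constants here are illustrative."""
        if prev_loss - curr_loss > tol:       # still improving: add faster
            return max(1, int(delta_n * grow))
        return max(1, int(delta_n * shrink))  # converged: slow the growth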


Learning Text Pair Similarity with Context-sensitive Autoencoders

We present a pairwise context-sensitive Autoencoder for computing text pair similarity. Our model encodes input text into context-sensitive representations and uses them to compute similarity between text pairs. Our model outperforms the state-of-the-art models in two semantic retrieval tasks and a contextual word similarity task. For retrieval, our unsupervised approach that merely ranks input...
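As a rough sketch of that recipe (a shared encoder plus a similarity score over latents; the 300-d inputs and layer sizes are assumptions, not the paper's architecture):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Sequential(nn.Linear(300, 128), nn.Tanh())  # toy shared encoder

    def pair_similarity(x1, x2):
        # Encode both texts with the same encoder, then score the pair by
        # the cosine of their latent representations.
        return F.cosine_similarity(encoder(x1), encoder(x2), dim=-1)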


Variational Autoencoders for Learning Latent Representations of Speech Emotion

Learning the latent representation of data in unsupervised fashion is a very interesting process that provides relevant features for enhancing the performance of a classifier. For speech emotion recognition tasks, generating effective features is crucial. Currently, handcrafted features are mostly used for speech emotion recognition, however, features learned automatically using deep learning h...


Image Transformation Using Variational Autoencoders

The way data are stored in a computer is definitively not the most intelligible approach that one can think about even though it makes computation and communication very convenient. This issue is essentially equivalent to dimensionality reduction problem under the assumption that the data can be embedded into a low-dimensional smooth manifold (Olah [2014]). We have seen couple of examples in th...



Journal

Journal title: Neural Computing and Applications

Year: 2023

ISSN: 0941-0643, 1433-3058

DOI: https://doi.org/10.1007/s00521-023-08485-1